
Author Search Result

[Author] Eiji OKI (86 hits)

Results 81-86 of 86

  • An Optimistic Synchronization Based Optimal Server Selection Scheme for Delay Sensitive Communication Services Open Access

    Akio KAWABATA, Bijoy Chand CHATTERJEE, Eiji OKI

     
    PAPER-Network System
    Publicized: 2021/04/09
    Vol: E104-B No:10
    Page(s): 1277-1287

    In distributed processing for communication services, a proper server selection scheme is required to reduce delay while preserving the order in which events occur. A conservative synchronization algorithm (CSA) has conventionally been used for this purpose, but an optimistic synchronization algorithm (OSA) is also feasible for synchronizing distributed systems. Unlike CSA, which reproduces events in occurrence order before the application processes them, OSA can realize low-delay communication by processing events as they arrive. This paper proposes an optimal server selection scheme that uses OSA for distributed processing systems to minimize end-to-end delay under the condition that the maximum status-holding time is limited. In other words, the end-to-end delay is minimized subject to an allowed rollback time, which is determined by application design considerations and the availability of computing resources. Numerical results indicate that the proposed scheme reduces delay compared with the conventional scheme.
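
    For intuition, the following is a minimal Python sketch of optimistic event processing with a bounded rollback window; the class names, the 50 ms budget, and the re-insertion logic are illustrative assumptions, not the scheme proposed in the paper.

    # A minimal sketch of optimistic event processing with a bounded rollback window.
    # Class and variable names (Event, max_rollback) are illustrative assumptions,
    # not the notation or algorithm of the paper.
    import bisect
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Event:
        occurrence_time: float                  # when the event occurred at its source
        payload: str = field(compare=False)

    class OptimisticProcessor:
        def __init__(self, max_rollback: float):
            self.max_rollback = max_rollback    # allowed rollback (status-holding) time
            self.processed: list[Event] = []    # applied events, kept in occurrence order

        def on_arrival(self, ev: Event) -> None:
            # Process optimistically on arrival instead of waiting to restore order.
            if self.processed and ev.occurrence_time < self.processed[-1].occurrence_time:
                lag = self.processed[-1].occurrence_time - ev.occurrence_time
                if lag <= self.max_rollback:
                    # Late event within the window: re-insert in occurrence order and
                    # (conceptually) re-apply application state from that point onward.
                    bisect.insort(self.processed, ev)
                # A lag beyond max_rollback must not occur by design of the window.
            else:
                self.processed.append(ev)

    proc = OptimisticProcessor(max_rollback=0.05)        # 50 ms budget (illustrative)
    for e in (Event(0.010, "a"), Event(0.030, "b"), Event(0.020, "c")):
        proc.on_arrival(e)                               # "c" arrives late, triggers rollback
    print([e.payload for e in proc.processed])           # ['a', 'c', 'b']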

  • Performance Analysis of Clos-Network Packet Switch with Virtual Output Queues

    Eiji OKI, Nattapong KITSUWAN, Roberto ROJAS-CESSA

     
    PAPER-Network System
    Vol: E94-B No:12
    Page(s): 3437-3446

    A three-stage Clos-network switch with input queues is attractive for the practical implementation of a large-capacity packet switch. A scheme that configures the first, second, and third stages in that sequence by performing iterative matchings based on random selections is called the staged random scheduling scheme. Despite the usefulness of such a switch, the literature provides no analytical formula that accurately calculates its throughput. This paper derives a formula that calculates the throughput of the staged random scheduling scheme, for one and multiple iterations, in an input-queued Clos-network switch under uniform traffic. The formula can be used to verify simulation models of very large switches. The derivation considers the selection process at each stage of the switch. The derived formula is used in numerical evaluations to show the throughput of large switches. The results show that the staged random scheduling scheme with multiple iterations for a Clos-network switch with virtual output queues (VOQs) and no internal expansion approaches 100% throughput under uniform traffic. Furthermore, the derived formulas are applied to a practical problem: estimating the number of iterations required to achieve 99% throughput for a given switch size. In addition, the staged random scheduling scheme in an input-queued Clos-network switch is modeled and simulated to compare throughput estimates with those obtained from the derived formulas. The simulation results support the correctness of the derived formulas.
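
    As a point of reference only, the snippet below evaluates the classic single-iteration random-matching expression for a single-stage N x N switch under uniform traffic; it is not the formula derived in this paper for the three-stage Clos-network switch, but it illustrates the kind of closed form such an analysis yields.

    # Illustration only: the classic single-iteration random-matching expression
    # 1 - (1 - 1/N)**N for a single-stage N x N switch under uniform traffic,
    # which tends to 1 - 1/e as N grows. It is NOT the formula derived in this
    # paper for the three-stage Clos-network switch.
    import math

    for n in (4, 16, 64, 256):
        match_prob = 1.0 - (1.0 - 1.0 / n) ** n
        print(f"N = {n:3d}: single-iteration match probability = {match_prob:.4f}")

    print(f"limit as N grows: 1 - 1/e = {1.0 - 1.0 / math.e:.4f}")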

  • Generalized Traffic Engineering Protocol for Multi-Layer GMPLS Networks

    Eiji OKI, Daisaku SHIMAZAKI, Kohei SHIOMOTO, Shigeo URUSHIDANI

     
    PAPER
    Vol: E88-B No:10
    Page(s): 3886-3894

    This paper proposes the Generalized Traffic Engineering Protocol (GTEP), a protocol for communication between a Path Computation Element (PCE) and a Generalized Multi-Protocol Label Switching (GMPLS) controller (CNTL). The controller is hosted by each GMPLS node; it handles GMPLS and MPLS routing and signaling protocols and controls its host node. The PCE provides multi-layer traffic engineering: it calculates Label Switched Path (LSP) routes and judges whether a new lower-layer LSP should be established. GTEP functions are implemented in both the PCE and the GMPLS router. We demonstrate a multi-layer traffic engineering experiment conducted with GTEP.
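
    As a purely hypothetical illustration of the PCE-to-controller interaction described above, the sketch below models a path computation request and reply in Python; the message names, fields, and decision rule are assumptions and do not reflect the GTEP message formats defined in the paper.

    # Hypothetical sketch of a PCE/controller exchange in the spirit of GTEP.
    # Message names, fields, and the decision rule are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class PathComputationRequest:
        src: str
        dst: str
        bandwidth_gbps: float

    @dataclass
    class PathComputationReply:
        route: list[str]                       # explicit route returned to the controller
        setup_new_lower_layer_lsp: bool        # PCE's multi-layer traffic engineering decision

    def pce_compute(req: PathComputationRequest) -> PathComputationReply:
        # Placeholder policy: large demands trigger a new lower-layer (e.g., optical) LSP.
        route = [req.src, "P1", req.dst]       # dummy route for illustration
        return PathComputationReply(route, setup_new_lower_layer_lsp=req.bandwidth_gbps >= 10.0)

    reply = pce_compute(PathComputationRequest("R1", "R2", bandwidth_gbps=40.0))
    print(reply.route, reply.setup_new_lower_layer_lsp)   # ['R1', 'P1', 'R2'] True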

  • Optimal Routing by the Intermediate Model -- Joining the Pipe and Hose Models --

    Eiji OKI, Ayako IWAKI

     
    LETTER-Switching for Communications
    Vol: E92-B No:10
    Page(s): 3247-3251

    This letter presents optimal routing with the intermediate model, a traffic model that lies between the pipe and hose models, and shows that it is a practical way of realizing optimal routing. A formulation extended from the pipe model to the intermediate model cannot be solved as a regular linear programming (LP) problem. Our solution, which applies the duality theorem, turns the problem into an LP formulation that is easily solved. Numerical results show that the intermediate model achieves better routing performance than the hose model.
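
    For intuition, the sketch below shows one common shape of such a robust-routing formulation and the duality step; the notation (per-node bounds alpha_p and beta_q, per-pair caps u_pq, routing fractions x) is illustrative and is not taken from the letter.

    % Illustrative notation only; not taken verbatim from the letter.
    \[
    \mathcal{T} \;=\; \Bigl\{\, d \ge 0 \;:\;
      \textstyle\sum_{q} d_{pq} \le \alpha_p,\quad
      \textstyle\sum_{p} d_{pq} \le \beta_q,\quad
      d_{pq} \le u_{pq} \,\Bigr\},
    \qquad
    \text{load}_{ij}(x) \;=\; \max_{d \in \mathcal{T}} \sum_{p,q} d_{pq}\, x^{pq}_{ij}.
    \]

    By LP duality, the inner maximization equals the optimum of a minimization with one dual variable per hose and per-pair constraint, so a capacity condition of the form load_ij(x) <= c_ij becomes a set of linear inequalities and the whole routing problem can be solved as a single LP.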

  • Packet Processing Architecture with Off-Chip Last Level Cache Using Interleaved 3D-Stacked DRAM Open Access

    Tomohiro KORIKAWA, Akio KAWABATA, Fujun HE, Eiji OKI

     
    PAPER-Network System
    Publicized: 2020/08/06
    Vol: E104-B No:2
    Page(s): 149-157

    The performance of packet processing applications depends on the memory access speed of network systems. Table lookup, which requires fast memory access, is one of the most common operations in packet processing applications and can be a dominant performance bottleneck. Therefore, in Network Function Virtualization (NFV)-aware environments, the fast on-chip cache memories of a general-purpose CPU become critical to achieving packet processing speeds of over tens of Gbps. In addition, carrier network systems execute multiple types of applications, including complex ones, simultaneously on the same system, which also requires adequate cache capacity. In this paper, we propose a packet processing architecture that uses interleaved 3-Dimensional (3D) stacked Dynamic Random Access Memory (DRAM) devices as an off-chip Last Level Cache (LLC) in addition to the several levels of dedicated cache memories of each CPU core. Entries of a lookup table are distributed over every bank and vault to exploit both bank interleaving and vault-level memory parallelism. Frequently accessed entries in the 3D-stacked DRAM are also cached in the on-chip dedicated cache memories of each CPU core. The evaluation results show that the proposed architecture reduces memory access latency by 57% and increases throughput by 100% while reducing the blocking probability by about 10% compared with an architecture using a shared on-chip LLC. These results indicate that 3D-stacked DRAM is practical as an off-chip LLC in parallel packet processing systems.
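
    As a rough illustration of distributing lookup-table entries across vaults and banks, the following Python sketch maps keys to (vault, bank) pairs; the device geometry and the modulo placement are assumptions, not the mapping used in the paper.

    # Hypothetical placement of lookup-table entries across vaults and banks so that
    # consecutive keys interleave and can be accessed in parallel. The geometry
    # constants and the modulo mapping are illustrative assumptions.
    NUM_VAULTS = 16          # vaults per 3D-stacked DRAM device (illustrative figure)
    BANKS_PER_VAULT = 8      # banks per vault (illustrative figure)

    def place_entry(key: int) -> tuple[int, int]:
        """Map a lookup-table key to a (vault, bank) pair."""
        vault = key % NUM_VAULTS
        bank = (key // NUM_VAULTS) % BANKS_PER_VAULT
        return vault, bank

    for key in range(5):
        print(key, place_entry(key))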

  • A New Multiple QoS Control Scheme with Equivalent-Window CAC in ATM Networks

    Eiji OKI, Naoaki YAMANAKA, Kohei SHIOMOTO, Soumyo D. MOITRA

     
    PAPER-Communication Networks and Services
    Vol: E81-B No:7
    Page(s): 1462-1474

    This paper proposes a multiple QoS control scheme that combines the head-of-line priority (HOLP) discipline with equivalent-window connection admission control (CAC). The proposed scheme supports different cell loss ratios for delay-sensitive traffic in high-priority buffers and delay-tolerant traffic in low-priority buffers. The CAC scheme extends a measurement-based CAC algorithm for a single buffer to the low-priority buffer under the HOLP discipline so as to meet the cell loss ratio objective. We introduce an equivalent window for monitoring low-priority cell streams. The equivalent window size equals the period within which the number of times the low-priority buffer is scanned to read cells is constant; the equivalent window size therefore varies with the high-priority queueing state. Numerical results indicate that the proposed QoS control scheme using equivalent-window CAC utilizes network resources more effectively than the conventional control scheme, which separates Virtual Paths (VPs) for services with different cell loss requirements. In addition, it is confirmed that the proposed scheme provides conservative admissible loads. Thus, the proposed scheme achieves large statistical gains while meeting both the high-priority and low-priority cell loss ratio objectives. The proposed scheme will be very useful for cost-effective multimedia services with different QoS requirements.
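
    For intuition, the following Python sketch counts how many time slots an "equivalent window" spans when the low-priority buffer is scanned only in slots where the high-priority buffer is empty; the Bernoulli traffic model is an assumption for illustration, not the paper's measurement-based CAC.

    # Illustrative sketch of the equivalent-window idea under head-of-line priority:
    # a window containing a fixed number of low-priority scan opportunities stretches
    # in time as the high-priority buffer becomes busier. The Bernoulli model of
    # high-priority occupancy is an assumption made only for this illustration.
    import random

    def equivalent_window_slots(scans_needed: int, high_busy_prob: float) -> int:
        """Count time slots until the low-priority buffer has been scanned scans_needed times."""
        slots = scans = 0
        while scans < scans_needed:
            slots += 1
            if random.random() >= high_busy_prob:   # high-priority buffer empty this slot
                scans += 1
        return slots

    random.seed(1)
    for rho_h in (0.2, 0.5, 0.8):
        print(f"high-priority busy prob {rho_h}: "
              f"a 1000-scan equivalent window spans about {equivalent_window_slots(1000, rho_h)} slots")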
